.require "memo.pub[let,jmc]" source
.item←0
.at "qi" ⊂"%8r%*"⊃
.at "qe" ⊂"%8q%*"⊃
.font C "zero30"
.at "Goedel" ⊂"G%C:%*odel"⊃
.at "qF" ⊂"%AF%*"⊃
.turn on "_" for "#"
.turn on "α"
.turn on "↑"
#. Introduction
(McCarthy 1959) proposed a program with "common sense" that
would express its knowledge about the world in general and about
the particular problem in a suitable logical language. It would then
reason in this language attempting to derive a sentence of the form
%2I should do X%1 after which it would do ⊗X. This approach was further
developed in (McCarthy 1963) and (McCarthy and Hayes 1969), but by
1969 it was already clear that something more than deduction is necessary
in order to express and use common sense knowledge. Further analysis
showed that what was missing was the ability to jump to the conclusion
that the facts taken into account are sufficient to solve a problem.
The best way of expressing this ability is still problematical, and this paper
presents several related candidates.
Even when a program does not reach its conclusions by manipulating
sentences in a formal language, we can often profitably analyze its
behavior by considering it to %2believe%1 certain sentences when it is
in certain states, and we can study how these %2ascribed beliefs%1 change
with time. See (McCarthy 1979). When we do this,
we will again discover that successful
programs must jump to conclusions.
All the well known systems of mathematical logic have the
following %2monotonicity property%1. If a sentence ⊗p follows
from a collection ⊗A of sentences and %2A_⊂_B%1, then ⊗p follows
from ⊗B. In the notation of proof theory: if %2A_qi_p%1 and
%2A_⊂_B%1, then %2B_qi_p%1.
Since a proof from the premisses ⊗A is a
sequence of sentences each of which is either a premiss, an axiom or
follows from a subset of the sentences occurring earlier in the
proof by one of the rules of inference, a proof from ⊗A can also
serve as a proof from ⊗B assuming %2A_⊂_B%1.
The semantic notion of entailment is also monotonic; we say that
⊗A entails ⊗p (written %2A_qe_p%1)
if ⊗p is true in all models of ⊗A. But if %2A_qe_p%1 and
%2A_⊂_B%1, then every model of ⊗B is also a model of
⊗A, which shows that %2B_qe_p%1.
Some AI reasoning systems are non-monotonic, because they
contain rules of inference that not only require the presence of
premiss sentences in the data base matching their patterns
but require the absence of sentences matching certain other patterns.
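As a present-day illustration (ours, not drawn from any particular system), the
following Python sketch shows such a rule: it concludes that the river can be
crossed only when no sentence reporting trouble with the boat is present in the
data base, in the manner of PLANNER's THNOT. Adding such a sentence later
withdraws the conclusion, which is what makes the rule non-monotonic. The
predicate names and the data base are invented for the example.
.begin nofill
def thnot(database, sentence):
    # Succeeds exactly when the sentence is absent from the data base.
    return sentence not in database

def concludeCanCross(database):
    # Non-monotonic rule: conclude "can cross" unless some sentence
    # reporting trouble with the boat is present.
    if thnot(database, ("something wrong with", "boat")):
        return set([("can cross", "river")])
    return set()

facts = set([("at", "river"), ("have", "boat")])
print(concludeCanCross(facts))            # rule fires: can cross
facts.add(("something wrong with", "boat"))
print(concludeCanCross(facts))            # larger data base, fewer conclusions
.end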
.if false then begin
%2Circumscription induction%1 is a formalized %2rule of conjecture%1
that can be used along with the %2rules of inference%1 of first order logic.
A computer program can use circumscription induction to conjecture that
the known entities of a given kind are all there are.
.end
.if true then begin
%2Circumscription%1 provides a collection of non-monotonic
formalized %2rules of
conjecture%1 that can be used along with the %2rules of inference%1 of first
order logic. %2Domain circumscription%1 conjectures that the "known" entities
of a given kind are all there are. %2Predicate circumscription%1 assumes that
entities satisfy a given predicate only if they have to on the basis of
a collection of facts.
.end
We will show by
examples that humans use such "non-monotonic" reasoning and that it is
required for intelligent behavior. The default case reasoning of many
computer programs and the use of THNOT in PLANNER programs are also
examples
of non-monotonic reasoning, but possibly of a different kind from those
discussed in this paper.
The result of applying domain circumscription to a
collection ⊗A of facts is a sentence schema that essentially
asserts that the only objects satisfying a predicate ⊗P(x) are
those whose existence follows from the sentences of ⊗A.
Since adding more sentences might require the existence of more
objects, circumscription is not monotonic.
Conclusions derived from circumscription may be regarded as
consequences of conjecturing that ⊗A includes all the relevant
facts. An intelligent program might make such a conjecture and
make a plan on the basis of the conclusions reached. It might
immediately carry out the plan, or be more cautious and look for
additional facts that might require modifying it. If it comes to
reject the circumscription conjecture, it will normally have to
circumscribe again on the basis of additional facts.
It will be necessary to
%2assume%1 that everything relevant has finally been taken into account,
except in mathematical problems, where it can often be proved
from the original statement of the problem.
The result of applying predicate circumscription to a collection
⊗A of sentences and a predicate ⊗P occurring in them is a sentence schema
that essentially asserts that the only objects satisfying ⊗P are those
that have to on the basis of the sentences ⊗A.
Before introducing the formalism, we present an informal example
showing the need for non-monotonic reasoning.
.bb Missionaries and Cannibals
The much used %2missionaries and cannibals%1 puzzle
contains enough detail to illustrate
many of the issues.
%2"Three missionaries and three cannibals come to a river. A
rowboat that seats two is available. If the cannibals ever
outnumber the missionaries on either bank of the river, the
missionaries will be eaten. How shall they cross the river?"%1.
Obviously the puzzler is expected to devise a strategy of rowing
the boat back and forth that gets them all across and avoids the disaster.
Amarel (1971) considered several representations of the problem
and discussed criteria whereby the following representation is preferred
for purposes of AI, because it leads to the smallest state space that must
be explored to find the solution. A state is a triple
comprising the numbers of missionaries, cannibals and boats on the starting
bank of the river. The initial state is 331,
the desired final state is 000, and one solution is given by the
sequence (331,220,321,300,311,110,221,020,031,010,021,000).
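The state space can be checked mechanically. The following Python sketch (ours;
the function names and the breadth-first search are not Amarel's) generates the
legal crossings from a state, searches the space, and prints a shortest
solution such as the one above.
.begin nofill
from collections import deque

def safe(m, c):
    # Missionaries are safe on a bank if there are none there or they are
    # not outnumbered; this must hold on both banks.
    return (m == 0 or m >= c) and (3 - m == 0 or 3 - m >= 3 - c)

def successors(state):
    m, c, b = state
    loads = [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]
    result = []
    for dm, dc in loads:
        if b == 1:
            nm, nc, nb = m - dm, c - dc, 0
        else:
            nm, nc, nb = m + dm, c + dc, 1
        if 0 <= nm <= 3 and 0 <= nc <= 3 and safe(nm, nc):
            result.append((nm, nc, nb))
    return result

def solve(start=(3, 3, 1), goal=(0, 0, 0)):
    # Breadth-first search over Amarel's state space.
    queue = deque([[start]])
    seen = set([start])
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for s in successors(path[-1]):
            if s not in seen:
                seen.add(s)
                queue.append(path + [s])
    return None

print(solve())    # one shortest solution, e.g. 331 220 321 300 311 110 ...
.end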
We are not presently concerned with the heuristics of the problem
but rather with the correctness of the reasoning that goes from the
English statement of the problem to Amarel's state space representation.
A generally intelligent computer program should be able to
carry out this reasoning. Of
course, there are the well known difficulties in making computers
understand English, but suppose the English sentences describing the
problem have already been rather directly translated into first order
logic. The correctness of Amarel's representation is not an ordinary
logical consequence of these sentences for two further reasons.
First, nothing has been stated about the properties of boats or
even the fact that rowing across the river doesn't change the numbers of
missionaries or cannibals or the capacity of the boat. Indeed it hasn't
been stated that situations change as a result of action. These facts follow
from common sense knowledge, so let us imagine that common sense
knowledge, or at least the relevant part of it, is also expressed in first
order logic.
The second reason we can't ⊗deduce the propriety of Amarel's
representation is deeper. Imagine giving someone the problem, and after
he puzzles for a while, he suggests going upstream half a mile and
crossing on a bridge. "What bridge", you say. "No bridge is mentioned in
the statement of the problem." And this dunce replies, "Well, they don't
say there isn't a bridge". You look at the English and even at
the translation of the English into first order logic, and you must admit
that "they don't say" there is no bridge. So you modify the problem
to exclude bridges
and pose it again, and the dunce proposes a helicopter, and after you
exclude that, he proposes a winged horse or that the others hang onto the
outside of the boat while two row.
You now see that while a dunce, he is
an inventive dunce. Despairing of getting him to accept the problem
in the proper puzzler's spirit, you tell him the solution. To your further
annoyance, he attacks your solution on the grounds that the boat might
have a leak or lack
oars. After you rectify that omission from the statement
of the problem, he suggests that a sea
monster may swim up the river and may swallow the boat. Again you are
frustrated, and you look for a mode of reasoning that will settle his hash
once and for all.
In spite of our irritation with the dunce, it would be
cheating to put into the statement of the problem
that there is no other way to cross the river
than using the boat and that nothing can go wrong with the boat.
A human doesn't need such an ad hoc narrowing of the problem, and indeed
the only watertight way to do it might amount to specifying the Amarel
representation in English. Rather we want to avoid the excessive
qualification and get the Amarel representation by common sense reasoning
as humans ordinarily do.
Circumscription is one candidate for accomplishing this.
It will allow us to conjecture that no relevant entities exist except those
whose existence follows from the statement of the problem and common
sense knowledge.
When we ⊗circumscribe the first order logic statement of the problem together
with the common sense facts about boats etc., we will
be able to conclude that there is
no bridge or helicopter. "Aha", you say, "but there won't be any oars
either". No, we get out of that as follows: It is a part of common
knowledge that a boat can be used to cross a river %2unless there is
something wrong with it or something else prevents using it%1, and if our
facts don't require that there be something that prevents crossing the river,
circumscription will generate the conjecture that there isn't.
The price is introducing as entities in our language the "somethings" that
may prevent the use of the boat.
If the statement of the problem were extended to mention a bridge,
then the circumscription of the problem statement would no longer permit
showing the non-existence of a bridge, i.e. a conclusion that can be drawn
from a smaller collection of facts can no longer be drawn from a larger.
This non-monotonic character of circumscription is just what we want for
this kind of problem. The statement, %2"There is a bridge a mile
upstream, and the boat has a leak."%1
doesn't contradict the text of the problem, but its addition
invalidates the Amarel representation.
In the usual sort of puzzle, there is a convention that there are
no additional objects beyond those mentioned in the puzzle or whose
existence is deducible from the puzzle and common sense knowledge.
The convention can be explicated as applying circumscription
to the puzzle statement and a certain part of common sense knowledge.
However,
if one really were sitting by a river bank and these six people came
by and posed their problem, one wouldn't take the circumscription
for granted, but one would consider the result of circumscription as a
hypothesis. Thus circumscription must be used as a mode of conjecture.
At first I considered this solution ad hoc,
but I am now convinced that it is a correct explication of the use of
%2normality%1; i.e. unless something abnormal exists, an object can
be used to carry out its normal function.
Some have suggested that the difficulties might be avoided by
introducing probabilities. They suggest that the existence of a bridge is
improbable. Well, the whole situation involving cannibals with the
postulated properties is so improbable that it is hard to take seriously
the conditional probability of a bridge given the hypotheses. Moreover,
we mentally propose to ourselves the normal non-bridge non-sea-monster
interpretation ⊗before considering these extraneous possibilities, let
alone their probabilities, i.e. we usually don't even introduce the sample
space in which these possibilities are assigned whatever probabilities one
might consider them to have. Therefore, regardless of our knowledge of
probabilities, we need a way of formulating the normal situation from the
statement of the facts, and non-monotonic reasoning seems to be required.
Using circumscription requires that common sense knowledge
be expressed
in a form that says a boat can be used to cross rivers unless there is
something wrong with it. In particular, it looks like we must introduce
into
our ⊗ontology (the things that exist) a category that includes
%2something wrong with a boat%1 or a category that includes
%2something that may prevent its use%1.
Incidentally, once we have decided to admit %2something wrong with the boat%1,
we are inclined to admit a %2lack of oars%1 as such a something and to ask
questions like, %2"Is a lack of oars all that is wrong with the boat?%1".
I imagine that many philosophers
and scientists would be reluctant to introduce such ⊗things, but the
fact that ordinary language allows %2"something wrong with the boat"%1
suggests that we shouldn't be hasty in excluding it. Making a suitable
formalism may be technically difficult and philosophically problematical,
but we must try.
We challenge anyone who thinks he can avoid such entities to
express in his favorite formalism, %2"Besides leakiness, there is something
else wrong with the boat"%1. A good solution would avoid counterfactuals
as this one does.
We hope circumscription will help understand natural language,
because if the use of natural language involves something like
circumscription, it is understandable that the expression of general
common sense facts in natural language will seem implausible.
.bb Co-ordinate systems and propositional calculus circumscription.
Suppose we want to establish a language in which to express some
class of assertions. Deciding on predicate and function symbols and their
meanings in order to express these assertions is like choosing a co-ordinate
system for a physical space. For example, in the blocks world one may
express everything in terms of the relation ⊗on(x,y,s) meaning that block
⊗x is on block ⊗y in situation ⊗s, but one might also use the relation
⊗above(x,y,s) that would be satisfied if there were intervening blocks
in between ⊗x and ⊗y. The same assertions about blocks can be made
in either language. In particular, each can be defined in terms of the
other, and equivalent sets of axioms can be written.
(I don't know of any general study of "logical co-ordinate systems" and
their relations and transformations. It will certainly be more complex
than the theory of linear transformations of vector spaces).
Logical co-ordinate systems that are equivalent in terms of what
can be expressed may be inequivalent heuristically.
Conjectures that are natural in one system may seem contrived in others.
In particular,
circumscription yields different conjectures in different co-ordinate
systems. A propositional calculus
version of circumscription illustrates some of the issues.
A propositional language for expressing some class
of assertions is established by choosing
propositional letters %2p%41%2,...,p%4n%1 and regarding each of them as
expressing some elementary fact.
We may then express the set ⊗A of known
facts as a sentence ⊗AX in terms of the %2p%1's. A model of the facts
is any assignment of truth values to the %2p%4i%1's that makes ⊗AX true.
As a simple example, we may be thinking
of assertions ⊗p, ⊗q, and ⊗r related by %2p_≡_(q_≡_r)%1
and satisfying %2p_∨_r%1.
One co-ordinate system would take %2p%41%2_=_p%1 and %2p%42%2_=_q%1,
and another would take %2p%41%2'_=_q%1 and %2p%42%2'_=_¬(p_≡_r)%1.
In one system we would have the axiom %2p%41%2
***********
Let ⊗M1 and ⊗M2 be models of ⊗AX. We define
%2M1 ≤ M2 ≡ ∀i.(true(p%4i%2,M1) ⊃ true(p%4i%2,M2))%1.
A minimal model is one that is
not a proper extension of another model, i.e. a model in which (roughly
speaking) as many
of the %2p%4i%1s as possible are false, consistent with the truth
of %2AX%1. %2Note that while the models of a set of facts are
independent of the co-ordinate system, the minimal models depend on the
co-ordinate system%1. Here is the most trivial example.
Let there be one letter ⊗p representing the one fact in a
co-ordinate system ⊗C, and let ⊗A be the null set of facts so that ⊗AX is
the sentence ⊗T (identically true). The one other co-ordinate system ⊗C'
uses the elementary sentence %2p'%1 taken equivalent to %2¬p%1.
The minimal model of ⊗C is
α{¬pα}, and the minimal model of ⊗C' is α{¬p'α}, i.e. α{pα}.
Since minimal reasoning depends on the co-ordinate system and not merely
on the facts expressed in the axioms, the utility of an axiom system
will depend on whether its minimal models express the uses we wish to
make of Occam's razor.
A less trivial example of minimal entailment is the following.
The propositional co-ordinate system has basic sentences %2p%41%1, %2p%42%1
and %2p%43%1, and %2AX = p%41%2 ∨ p%42%1. There are two minimal models.
%2p%43%1 is false in
both, and %2p%41%1 is false in one, and %2p%42%1 is false in the other.
Thus we have %2AX_%8q%4m%2_(¬(p%41%2_∧_p%42%2)_∧_¬p%43%1).
Note that if ⊗AX is a conjunction of certain elementary sentences
(some of them negated), e.g. %2AX_=_p%42%2_∧_¬p%44%1, then there is a unique
minimal model, but if, as above, %2AX_=_p%42%2_∨_p%44%1, then there
is more than one minimal model.
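These minimal models can be verified by brute force. The Python sketch below
(ours) enumerates the truth assignments to %2p%41%1, %2p%42%1 and %2p%43%1,
keeps those satisfying ⊗AX, discards any assignment that properly extends
another, and tests minimal entailment as truth in all the survivors.
.begin nofill
from itertools import product

def models(ax):
    # All truth assignments to p1, p2, p3 (as 0/1 triples) satisfying ax.
    return [v for v in product((0, 1), repeat=3) if ax(v)]

def leq(v1, v2):
    # v1 <= v2: every letter true in v1 is also true in v2.
    return all(a <= b for a, b in zip(v1, v2))

def minimalModels(ax):
    ms = models(ax)
    return [m for m in ms if not any(leq(n, m) and n != m for n in ms)]

def minimallyEntails(ax, wff):
    return all(wff(m) for m in minimalModels(ax))

ax = lambda v: v[0] or v[1]             # AX = p1 or p2
print(minimalModels(ax))                # [(0, 1, 0), (1, 0, 0)]
print(minimallyEntails(ax, lambda v: not (v[0] and v[1]) and not v[2]))   # True
.end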
One may also consider extensions %2relative%1 to a set ⊗B of
elementary sentences. We have
%2M1 ≤ M2 (mod B) ≡ ∀p.(p ε B ∧ true(p,M1) ⊃ true(p,M2))%1.
We can then introduce minimal models relative to ⊗B and the corresponding
relative minimal entailment. Thus we may care about minimality and
Occam's razor only over a certain part of our knowledge.
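In the same spirit, relative minimality only needs the comparison to be
restricted to the letters in ⊗B. The following Python sketch (ours; it restates
the enumeration so as to stand alone) shows the change.
.begin nofill
from itertools import product

def modelsOf(ax, n=3):
    return [v for v in product((0, 1), repeat=n) if ax(v)]

def leqMod(v1, v2, b):
    # v1 <= v2 (mod B): the comparison looks only at the letters in B,
    # given here as a tuple of indices into the assignment.
    return all(v1[i] <= v2[i] for i in b)

def minimalModelsMod(ax, b):
    ms = modelsOf(ax)
    return [m for m in ms
            if not any(leqMod(n, m, b) and any(n[i] != m[i] for i in b)
                       for n in ms)]

# Minimizing only p3 (index 2) under AX = p1 or p2 leaves p1 and p2 free.
print(minimalModelsMod(lambda v: v[0] or v[1], (2,)))
.end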
Note that we may discover a set of facts one at a time so that we
have an increasing sequence %2A%41%2 ⊂ A%42%1 ⊂ ... of sets of sentences.
A given sentence ⊗p may oscillate in its minimal entailment by the %2A%4i%1,
e.g. we may have %2A%41%8_q%4m%2_p%1, %2A%42%8_q%4m%2_¬p%1, neither may be
entailed by %2A%43%1, etc. The common sense Occam's razor conclusion
about a sentence ⊗p often behaves this way as our knowledge increases.
Reasons can be used to reduce propositional circumscription to
domain circumscription. We introduce for each propositional letter
%2p%4i%1 to be minimized a corresponding predicate %2pr%4i%2(r)%1 and the
axiom
!!g1: %2p%4i%2 ≡ ∃r.pr%4i%2(r)%1,
which asserts that %2p%4i%1 is true if and only if there is a reason why.
We also need axioms
!!g2: %2∀r.¬(pr%4i%2(r) ∧ pr%4j%2(r))%1
for each pair of distinct ⊗i and ⊗j and axioms asserting that reasons ⊗r are distinct from
any other individuals. A minimal model will then have as few reasons as
possible and hence will minimize the set of true %2p%4i%1s.
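For the example %2AX_=_p%41%2_∨_p%42%1 this reduction can be checked on finite
domains. In the Python sketch below (ours; the reason names are invented), each
reason is an individual, %2p%4i%1 holds exactly when its reason is in the
domain, and minimizing the domain yields exactly the propositionally minimal
models.
.begin nofill
reasons = ("r1", "r2")      # r1 is a reason for p1, r2 a reason for p2

def subsets(s):
    out = [()]
    for x in s:
        out = out + [t + (x,) for t in out]
    return out

def truth(domain):
    # p i holds in a domain exactly when the corresponding reason is present.
    return tuple(1 if r in domain else 0 for r in reasons)

def ax(v):
    return v[0] or v[1]     # AX = p1 or p2, with p i true when a reason exists

domains = [set(d) for d in subsets(reasons) if ax(truth(d))]
minimal = [d for d in domains if not any(e < d for e in domains)]
print([truth(d) for d in minimal])    # (1, 0) and (0, 1): the minimal models
.end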
.skip to column 1
.bb Circumscription in the Blocks World
Here is a formalization of the blocks world in which circumscription
is used to keep open the set of possible circumstances that may
prevent a block ⊗x from being moved onto a block ⊗y. We use the
situation calculus of (McCarthy and Hayes 1969).
A situation ⊗s is a snapshot of the world at an instant of
time. We write ⊗on(x,y,s) to say that block ⊗x is on block ⊗y in
situation ⊗s. %2s'_=_move(x,y,s)%1 is the new situation ⊗s' that results when
the action of moving block ⊗x onto block ⊗y in situation ⊗s is attempted.
If the preconditions for the action to be successful are satisfied, we
will have %2on(x,y,s')%1.
We begin with a static world, in which we want to use
circumscription to avoid stating more on-ness relations than
necessary. Specifically, we would like circumscription to
give us
1. A block is clear unless contrary evidence exists.
2. A block is on the table unless contrary evidence exists.
3. Only those blocks exist for which there is evidence.
Suppose we have sentences ⊗on(A,B), ⊗on(B,C) and ⊗on(D,E), where
⊗A, ... , ⊗E are specific blocks. We have the axiom
%2∀x.(x ≠ Table ⊃ ∃y.on(x,y))%1
asserting that every block except the table is supported. The
axiom schema
%2qF(A,B) ∧ qF(B,C) ∧ qF(D,E) ⊃ ∀x y.(on(x,y) ∧ y ≠ Table ⊃ qF(x,y))%1
insures that the only ⊗on(x,y) relations are those explicitly stated except
that all blocks not mentioned as on something else are on the table.
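As a check, the intended minimal model can be computed directly. In the Python
sketch below (ours; the block and function names are only illustrative), the
stated on-facts are kept, every block not stated to be on something is placed
on the table, and clearness is read off from the result.
.begin nofill
stated = [("A", "B"), ("B", "C"), ("D", "E")]
blocks = ["A", "B", "C", "D", "E"]

def minimalOn(stated, blocks):
    # Keep the stated on-facts; every other block goes on the table,
    # and no further on-facts hold (the effect of the schema above).
    on = list(stated)
    supported = [x for x, y in stated]
    for b in blocks:
        if b not in supported:
            on.append((b, "Table"))
    return on

def clear(b, on):
    # A block is clear when nothing is recorded as being on it.
    return all(y != b for x, y in on)

print(minimalOn(stated, blocks))    # adds C on Table and E on Table
print([b for b in blocks if clear(b, minimalOn(stated, blocks))])   # A and D
.end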
The effect of motion is given by
%2∀x y s.(movable(x,y,s) ⊃ on(x,y,move(x,y,s)))%1.
There are various known causes of immovability. Thus
we may have
%2∀x y z s.(on(z,x,s) ⊃ ¬movable(x,y,s))%1,
%2∀x y z s.(on(z,y,s) ⊃ ¬movable(x,y,s))%1
and
%2∀x y s.(tooheavy x ⊃ ¬movable(x,y,s))%1.
If these are all the causes of immovability that we wish to take into
account, we may combine the above into a single sentence
!!e1: %2∀s x y z.(on(z,x,s) ∨ on(z,y,s) ∨ tooheavy x ⊃ ¬movable(x,y,s))%1.
The converse of ({eq e1}) may be written as a circumscription in the form
%2∀s.[∀x y z.[on(z,x,s) ∨ on(z,y,s) ∨ tooheavy x ⊃ qF(x,y,s)] ⊃
∀x y.[¬movable(x,y,s) ⊃ qF(x,y,s)]]%1.
Of course, the converse can be written directly as
!!e2: %2∀s x y z.(¬movable(x,y,s) ⊃ on(z,x,s) ∨ on(z,y,s) ∨ tooheavy x)%1.
Thus circumscription is a kind of converse and requires a schema to express it
only when the set of objects to be circumscribed is inductively defined.
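Computationally the converse ({eq e2}) amounts to a closed-world test: a move
is possible unless one of the three listed causes applies. The Python sketch
below (ours, with invented block data) makes that reading explicit.
.begin nofill
def movable(x, y, s, on, tooheavy):
    # Closed-world reading of the converse: x can be moved onto y in s unless
    # something is on x, something is on y, or x is too heavy.
    somethingOnX = any(b == x for a, b in on(s))
    somethingOnY = any(b == y for a, b in on(s))
    return not (somethingOnX or somethingOnY or x in tooheavy)

# Hypothetical situation: A is on B, nothing is on C, and D is too heavy.
def on(s):
    return [("A", "B")]

print(movable("A", "C", "S0", on, ["D"]))    # True: nothing prevents it
print(movable("B", "C", "S0", on, ["D"]))    # False: A is on B
print(movable("D", "C", "S0", on, ["D"]))    # False: D is too heavy
.end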
.bb Comparison with other formalisms
1. Default cases. Many programs provide default cases.
.deb←true
.if deb then begin
.at "qpi" ⊂"%2p%4i%2"⊃
.at "qpj" ⊂"%2p%4j%2"⊃
1. Maybe we can consider propositional and domain circumscription as instances
of a common concept if we allow ourselves to introduce an arbitrary
partial ordering of models of sets of sentences and consider minimal
models with respect to this ordering. We probably want to restrict the
ordering so that there will be some syntactic counterpart of minimal
inference. Actually propositional circumscription doesn't meet this
requirement without modification, but anyway it's desirable.
2. Perhaps the right way to reduce propositional circumscription to
domain circumscription is to replace each propositional variable
%2p%4i%1 by the wff %2∃x.(index x = i)%1. In that way minimizing
the truth of the proposition is accomplished by minimizing the existence
of the %2x%1's. The purpose of %2index x = i%1 is to make sure that the
existence of the %2x%1's can be minimized separately. We need to be
sure that no other object has an integer index, but we can introduce
axioms that require them all to have the non-integer JUNK as index.
This raises the problem of whether there is an analogous way of
treating predicates %2p(u,v)%1. Maybe we want to replace it by
%2∃x.(p'(u,v,x)_∧_index_x_=_i)%1 or perhaps instead of %2_=_i%1, we
should have %2_=_<u,v,i>%1.
3. CIRCUM[1,CLT] is the following example:
Consider the following problem (told to CLT by DrewMcD, ascribed to Bmoore):
You are told that the only things on the table are things that you can prove
are on the table. Further you are told that A is on the table and B is on
the table and that there is something red on the table. You would like to
be able to conclude that either A is red or B is red.
Circumscription can be applied to solve this problem. We formalize the
problem in a language L=<<Ontable,Red>,<>,<A,B>> where Ontable and Red
are unary. The theory is:
(1) Ontable(A)
(2) Ontable(B)
(3) ∃x.(Ontable(x)∧Red(x))
Circumscribing with respect to (1) and (2) we obtain the schema:
(4) qF(A)∧qF(B) ⊃ ∀x.qF(x)
and it is easy to jump to the conclusion that
(5) ∀x.(x=A∨x=B)
and thus to conclude that Red(A)∨Red(B).
Notice that if the circumscription is done with respect to all three facts
then we lose. Namely we get the schema
(6) qF(A)∧qF(B)∧∃x.(qF(x)∧Ontable(x)∧Red(x)) ⊃ ∀x.qF(x)
from which we can only conclude
(7) ∀x.(x=A∨x=B∨Red(x))
which does not allow us to show that either A or B is red.
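This can be confirmed by enumerating finite interpretations. The Python sketch
below (ours; the extra object is hypothetical) tries every extension of Ontable
and Red: over the two element domain sanctioned by circumscribing (1) and (2)
alone, Red(A)∨Red(B) holds in every model of (1)-(3), while admitting a third
object lets it fail, which is the point of the example.
.begin nofill
def subsets(s):
    out = [()]
    for x in s:
        out = out + [t + (x,) for t in out]
    return out

def factsHold(domain, ontable, red):
    f1 = "A" in ontable
    f2 = "B" in ontable
    f3 = any(x in ontable and x in red for x in domain)
    return f1 and f2 and f3

def modelsWithDomain(domain):
    result = []
    for ontable in subsets(domain):
        for red in subsets(domain):
            if factsHold(domain, ontable, red):
                result.append((set(ontable), set(red)))
    return result

small = modelsWithDomain(("A", "B"))
print(all("A" in red or "B" in red for ontable, red in small))   # True
big = modelsWithDomain(("A", "B", "extra"))
print(all("A" in red or "B" in red for ontable, red in big))     # False
.end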
4. The essence of the Moore-McDermott example is included in the
following simpler example.
There is one constant A and one sentence ∃x.Red(x). Circumscribing
on the constant only gives qF(A) ⊃ ∀x.qF(x) and specializing qF
by ∀x.(qF(x) ≡ x = A) establishes ∀x.(x = A) from which Red(A) follows.
Circumscribing on both A and ∃x.Red(x) gives
qF(A) ∧ ∃x.(qF(x) ∧ Red(x)) ⊃ ∀x.qF(x)
from which it doesn't look like we can infer Red(A).
It looks like circumscription doesn't give us what is true
in all minimal models but only what is true in all minimal models
contained in arbitrary models, i.e. a sentence p is provable by
circumscribing a set of sentences A iff every model M of A
is such that every minimal model M' of A contained in M satisfies p.
THE ABOVE WAS A MISTAKE. At least in the above case the model
with two objects - A and a red one - is indeed minimal with respect to
α{A, ∃x.Red(x)α}.
5. Imitation semantics. Suppose we replace some of the predicate symbols
of a theory by new symbols with one more argument - a world. Thus
Red(x) is replaced by Red(x,w). Some of the properties of model theory
can now be axiomatized within the theory. The process can be repeated by
adding another "outer" world parameter, but we shall discuss only a single
such "semanticization" to begin with.
Every term and wff now depends on w - assuming, as we shall for now,
that we apply each predicate to the same variable w as we build up the
wffs. We can quantify over w, so that ∀w.<wff> will enjoy some of the
properties of validity and ∃w.<wff> will have some of the properties of
satisfiability.
How to axiomatize w is not immediately obvious. One idea is to
imagine a common domain of non-worlds and introduce a predicate
exists(x,w). (I confess a weakness for theories in which something
like existence is a predicate). This allows a definition of ordering
.begin nofill
%2w1 < w2 ≡ ∀x.(exists(x,w1) ⊃ exists(x,w2))
∧ ∀xy.(exists(x,w1) ∧ exists(y,w1) ⊃ (P(x,y,w1) ≡ P(x,y,w2)))
∧ ...%1
.end
where ... indicates that there is a wff similar to that on the second line
for each predicate in the language.
This will permit us to define minimal worlds.
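One concrete reading (ours) represents a world by the set of objects existing
in it together with the set of atomic facts true in it; the Python sketch below
then states the ordering for a single binary predicate P and tests minimality
among a given collection of worlds.
.begin nofill
def worldLeq(w1, w2):
    # The ordering of the text (reflexive version): w1 and w2 are pairs
    # (objects, facts), where facts is a set of triples ("P", x, y).
    # w1 precedes w2 when everything existing in w1 exists in w2 and the
    # two worlds agree on P over the objects of w1.
    objects1, facts1 = w1
    objects2, facts2 = w2
    if not objects1 <= objects2:
        return False
    for x in objects1:
        for y in objects1:
            if (("P", x, y) in facts1) != (("P", x, y) in facts2):
                return False
    return True

def minimalWorld(w, worlds):
    return all(not (worldLeq(v, w) and v != w) for v in worlds)

w1 = (frozenset(["a"]), frozenset([("P", "a", "a")]))
w2 = (frozenset(["a", "b"]), frozenset([("P", "a", "a")]))
print(worldLeq(w1, w2), minimalWorld(w1, [w1, w2]))   # True True
.end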
Our ability to imitate real model theory will depend on our tools
for asserting the existence of enough worlds. If the domain is infinite
and there is even one unary predicate symbol, the number of interpretations
is already non-denumerable, and the Lowenheim-Skolem theorem already tells
us that no axiomatization can insure that.
A certain amount can be done by postulating that for every world
w there is a world w' that gives the same value to all predicates except
that a single predicate has a single value changed. We can also imagine
other changes whose possibility can be stated. The main purpose of such
axiomatizations will be to insure that certain formulas or at least enough
formulas are "satisfiable".
"Completeness" can also be defined as asserting that all "valid"
formulas are provable.
I think all this is related to circumscription, but I can't now
make the connection.
6. Propositional circumscription can be done by introducing worlds as
follows:
Each propositional variable %2p%4i%1 is replaced by a function
%2qpi(w)%1.
We write
%2w1 ≤ w2 ≡ ∀i.[qpi(w1) ⊃ qpi(w2)]%1
where we may suppose that the quantification over ⊗i actually designates
a conjunction over the set of propositional variables to be circumscribed.
Minimality is defined by
%2minimal w ≡ ∀w1.(w1 ≤ w ⊃ w1 = w)%1.
A wff in the original formulation becomes a %2wff(w)%1 in the
new. We write %2wff1_qe_wff2%1 if
%2∀w.(wff1(w) ∧ minimal w ⊃ wff2(w))%1.
7. Further note on propositional circumscription
Perhaps the right analog of the qF of domain circumscription
is to replace a formula by another that contains a free propositional
parameter - call it π. Circumscribing sets this parameter to whatever
value is convenient. Thus suppose we have
%2leakyboat ⊃ cantgo%1,
and we either do or don't have the additional formula %2leakyboat%1.
The circumscription formula should then be either
%2leakyboat ∧ (leakyboat ⊃ π) ⊃ (cantgo ⊃ π)%1
or just
!!j1: %2(leakyboat ⊃ π) ⊃ (cantgo ⊃ π)%1.
In the former case, taking π to be false leads to a tautology, since the
left side of the main implication is false. In the second case it
leads to the converse
%2¬leakyboat ⊃ ¬cantgo%1
which is a step in the right direction anyway.
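The last step is a two-line truth table check. The Python sketch below (ours)
confirms that substituting false for π in ({eq j1}) gives a formula equivalent
to %2¬leakyboat_⊃_¬cantgo%1.
.begin nofill
def implies(a, b):
    return (not a) or b

def j1(leakyboat, cantgo, pi):
    return implies(implies(leakyboat, pi), implies(cantgo, pi))

def converse(leakyboat, cantgo):
    return implies(not leakyboat, not cantgo)

print(all(j1(l, c, False) == converse(l, c)
          for l in (False, True) for c in (False, True)))   # True
.end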
Where ({eq j1}) is to come from is still not clear.
.end
This version of MINIMA.NEW[S79,JMC] PUBbed at {time} on {date}.